Monte Carlo Methods: Lab 2


In [1]:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)


Out[1]:

Previous Lab

The key point of the previous lab was that if you can draw random samples from a suitable distribution, you can estimate a quantity of interest as the "average" value of some function over those samples.

The modelling issue boils down to: how do we construct the samples or the distribution? If you know the distribution you can use rejection sampling, as in lab 1. If you don't know the distribution, you can still generate suitable samples indirectly using Monte Carlo methods. In particular, let's look at Markov Chain Monte Carlo (MCMC) methods and the Metropolis-Hastings algorithm.

Ideal gas

Let's put $N$ particles of mass $1$ inside a cubic box, each side having length $1$. Each particle is described by three quantum numbers $n_x, n_y, n_z$, which are positive integers ($1, 2, \dots$). The energy of each particle is

$$ E^{(i)} \left( n_x^{(i)}, n_y^{(i)}, n_z^{(i)} \right) = \frac{h^2}{8} \left( \left( n_x^{(i)} \right)^2 + \left( n_y^{(i)} \right)^2 + \left( n_z^{(i)} \right)^2 \right). $$

In an ideal gas there is no interaction between the particles, so the total energy is

$$ E = \sum_{i=1}^N E^{(i)} \left( n_x^{(i)}, n_y^{(i)}, n_z^{(i)} \right). $$

Now, we can easily compute the energy of any given set of particles (we'll use units where $h = 2\pi$, so the prefactor $h^2/8$ is $\pi^2/2$):


In [ ]:
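# One possible implementation - a sketch, not prescribed by the lab:
# the function name `energy` and the (N, 3) integer array layout are
# choices made here. With h = 2*pi the prefactor h**2/8 is pi**2/2.
import numpy

def energy(numbers):
    """Total energy of N non-interacting particles in a unit box."""
    return numpy.pi**2 / 2.0 * numpy.sum(numbers**2)

# Quick check: 10 particles, all in the ground state (1, 1, 1).
numbers = numpy.ones((10, 3), dtype=int)
print(energy(numbers))   # = 10 * 3 * pi**2 / 2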

We could now draw lots of samples of the quantum numbers and find the most likely energy, which we expect to be the ground state. However, this would be ludicrously expensive. We want the equivalent of importance sampling: finding the "right" samples at which to do the measurement in the first place.

Markov Chains

The trick here is to think about linking different sets of random samples. Each set of $N$ particles is a single draw from a random distribution. Rather than jumping from one draw to the next completely at random, we want to specify how likely it is that we move from one draw to another. If this probability depends only on where we are, not on where we've been, then we have a Markov Chain. If the process of transitioning from one draw to the next obeys

  1. detailed balance - in equilibrium, the probability of being in state $i$ and transitioning to state $j$ is the same as the probability of being in state $j$ and transitioning to state $i$, and
  2. ergodicity - every state can be reached from every other state by some sequence of transitions,

then the Markov Chain will settle down, or equilibrate, to its stationary distribution, in which the most likely states appear most often.

So we need to choose a method of moving from one draw - one set of particles - to the next, using only the information that we have: the energy. There are many ways of doing this.

Metropolis-Hastings

The Metropolis-Hastings algorithm starts from the assumption that the system will try to minimize its energy. Therefore the sample $n_j$ with energy $E_j$ is "better" than the sample $n_i$ with energy $E_i$ if $E_j < E_i$.

However, a rule that only accepts transitions from samples with high energy to samples with low energy doesn't satisfy detailed balance. There must be some chance of moving to a state with higher energy. This probability must match the qualitative form of the overall distribution. If we assume that the energies follow the Boltzmann distribution, then the chance of a state appearing with $E_j > E_i$ is $\sim \exp \left[-\beta \left( E_j - E_i \right) \right]$, where $\beta = 1 / (k_B T)$ ($k_B$ is the Boltzmann constant and $T$ the temperature of the system).

So, the Metropolis-Hastings algorithm is:

  1. Draw some sample $n_0$ with energy $E_0$. Set $i=0$.
  2. Draw a new trial sample $n_{j}$ with energy $E_j$.
  3. If $E_j < E_i$, accept the new sample: set $i=j$.
  4. If $E_j > E_i$, accept the new sample with probability $P_a = \exp \left[-\beta \left( E_j - E_i \right) \right]$. That is, draw a uniform random number $U \in [0, 1)$ and accept the new sample if $U < P_a$.
  5. Repeat from 2 until the system equilibrates.

Now, our new sample $n_j$ could change the quantum numbers of all the particles, or just one particle, or just one quantum number of one particle. It's usual (but not necessary) to start by making the smallest change, so let's change a single quantum number, increasing or decreasing it by one with equal probability $1/2$ (remembering that quantum numbers must remain positive).

Let's look at that case with $\beta = 0.1$.


In [ ]:
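# A sketch of the Metropolis-Hastings loop for the ideal gas, assuming
# the energy() function above; the parameter names and defaults are
# illustrative choices, not prescribed by the lab.
import numpy

def mcmc_ideal_gas(N=100, beta=0.1, n_steps=10000):
    numbers = numpy.random.randint(1, 10, size=(N, 3))   # random start
    E = energy(numbers)
    energies = numpy.zeros(n_steps)
    for step in range(n_steps):
        # Propose: change one quantum number of one particle by +/- 1.
        i = numpy.random.randint(N)
        q = numpy.random.randint(3)
        dn = 1 if numpy.random.rand() < 0.5 else -1
        if numbers[i, q] + dn >= 1:   # quantum numbers stay positive
            dE = numpy.pi**2 / 2.0 * ((numbers[i, q] + dn)**2
                                      - numbers[i, q]**2)
            # Accept downhill moves; otherwise accept with
            # probability exp(-beta * dE).
            if dE < 0 or numpy.random.rand() < numpy.exp(-beta * dE):
                numbers[i, q] += dn
                E += dE
        energies[step] = E
    return numbers, energies

numbers, energies = mcmc_ideal_gas(N=100, beta=0.1)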


In [ ]:
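# Plot the energy against step to watch the chain equilibrate
# (uses the energies array from the cell above).
%matplotlib inline
from matplotlib import pyplot

pyplot.plot(energies)
pyplot.xlabel("Step")
pyplot.ylabel("Energy")
pyplot.show()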

Parameter study

If we want to model the gas macroscopically we will average over the particles to get an equation of state. The result would be the functional form of the energy $E$ as a function of the density $\rho = N/V$, holding other variables fixed. Use the MCMC code to get $E(\rho)$ for $\rho \in [10^1, 10^4]$ (which, as the volume is fixed at $1$, indirectly varies $N$), with other parameters as above.


In [ ]:
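# A sketch of the parameter study: the box volume is 1, so rho = N and
# varying rho means varying N. The step count and the choice to average
# over the second half of the chain are rough, illustrative choices.
import numpy
from matplotlib import pyplot

rhos = numpy.logspace(1, 4, 7).astype(int)
mean_energies = []
for rho in rhos:
    _, energies = mcmc_ideal_gas(N=int(rho), beta=0.1, n_steps=10000)
    # Average over the second half, after (hopefully) equilibrating.
    mean_energies.append(numpy.mean(energies[len(energies)//2:]))

pyplot.loglog(rhos, mean_energies, 'kx')
pyplot.xlabel(r"$\rho$")
pyplot.ylabel(r"$E$")
pyplot.show()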

Lennard-Jones fluids

Note: an electronic copy of the Frenkel and Smit book is available through the library. This lab is based on Case Study 1 in section 3.4 of that book.

When computing the interactions between lots of bodies (atoms, molecules, planets, etc.) we can either use the true potential or force between them, or we can approximate it with some potential (or force) that is easier (and usually cheaper) to calculate. The parameters of the approximate potential can then be set to reproduce the "real" features we're interested in.

In computational chemistry, one such approximation is the Lennard-Jones potential. Given two bodies separated by a distance $r$, the potential between them is

\begin{equation} U(r) = 4 \varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right]. \end{equation}

Here $\varepsilon$ and $\sigma$ are parameters. When there are more than two bodies the total potential is the sum over all pairwise potentials.

In principle every pair of particles interacts, however large their separation. In practice it is typical to truncate the potential: to pick a cut-off distance beyond which pairs of particles do not contribute directly, and then to correct for the small contributions that are neglected.

Here we use a Lennard-Jones potential inside a box of size $[0,L]^3$ with a cut-off $r_c = L/2$, with the parameters set to $\varepsilon = \sigma = 1$ (reduced units) so that

\begin{equation} U = \begin{cases} 4 \left[ \frac{1}{r^{12}} - \frac{1}{r^6} \right] & r < r_c \\ 0 & r > r_c. \end{cases} \end{equation}

Include tail corrections (that is, additional energy and pressure terms resulting from the particles outside the cutoff radius) as

\begin{align} U^{\text{tail}} & = \frac{8 \pi \rho}{3} \left[ \frac{1}{3} \frac{1}{r_c^9} - \frac{1}{r_c^3} \right] \\ p^{\text{tail}} & = \frac{16 \pi \rho^2}{3} \left[ \frac{2}{3} \frac{1}{r_c^9} - \frac{1}{r_c^3} \right]. \end{align}
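
As a concrete sketch (the function names are illustrative choices, not prescribed by the lab), the truncated potential and the tail corrections might be implemented as follows; working with the squared separation $r^2$ avoids square roots, which will also pay off in the Efficiency section below.

In [ ]:
# Truncated LJ potential and tail corrections in reduced units
# (epsilon = sigma = 1), working with squared separations.
import numpy

def lj_potential(r2, r_cut2):
    """Pair potential U given the squared separation r2."""
    if r2 >= r_cut2:
        return 0.0
    inv_r6 = 1.0 / r2**3                 # 1/r^6
    return 4.0 * (inv_r6**2 - inv_r6)    # 4 [1/r^12 - 1/r^6]

def tail_corrections(rho, r_cut):
    """Energy and pressure tail corrections, as given above."""
    u_tail = 8.0 * numpy.pi * rho / 3.0 \
        * (1.0 / (3.0 * r_cut**9) - 1.0 / r_cut**3)
    p_tail = 16.0 * numpy.pi * rho**2 / 3.0 \
        * (2.0 / (3.0 * r_cut**9) - 1.0 / r_cut**3)
    return u_tail, p_tail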

For each configuration we need to compute the pressure using

\begin{equation} p = \frac{\rho}{\beta} + \frac{\text{Virial}}{V} \end{equation}

where

\begin{equation} \text{Virial} = \sum_i \sum_{j > i} \vec{f}( \vec{r}_{ij} ) \cdot \vec{r}_{ij} \end{equation}

where, as usual, $\vec{r}_{ij}$ is the separation between the atoms, $\vec{r}_{ij} = \vec{r}_i - \vec{r}_j$, and the intermolecular force $\vec{f}$ is given by

\begin{align} \vec{f}(\vec{r}_{ij}) &= - \nabla U \\ & = \begin{cases} 24 \left[ 2 \frac{1}{r^{14}} - \frac{1}{r^8} \right] \vec{r}_{ij} & r < r_c \\ \vec{0} & r > r_c \end{cases} \end{align}

Note that in reduced units $\beta = T^{-1}$.
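
Since $\vec{f}(\vec{r}_{ij}) \cdot \vec{r}_{ij} = 24 \left[ 2 r^{-12} - r^{-6} \right]$ for $r < r_c$, the virial can be accumulated from the squared separations alone. A minimal sketch, assuming the squared pair separations are held in a flat array and using the (illustrative) `tail_corrections` helper above:

In [ ]:
# Pressure from the virial; r2 is a flat array of squared pair
# separations (one entry per pair j > i).
import numpy

def pressure(r2, rho, T, V, r_cut):
    inside = r2[r2 < r_cut**2]            # pairs within the cut-off
    inv_r6 = 1.0 / inside**3              # 1/r^6 for each pair
    virial = numpy.sum(24.0 * (2.0 * inv_r6**2 - inv_r6))
    u_tail, p_tail = tail_corrections(rho, r_cut)
    return rho * T + virial / V + p_tail  # rho/beta = rho*T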

Monte Carlo code

We will be using an $NVT$ approach, keeping the number of particles fixed ($N = 100$), the temperature fixed ($T=2$) and the volume fixed (indirectly, via the density $\rho = N / V = N L^{-3}$; use $\rho = a/10$ for $a = 1, \dots, 9$, but start by just considering the $a=1, 2$ cases). You will need to take at least $10,000$ steps for the larger values of $a$; $20,000$ is better, but in all cases you should first test with a smaller number of particles and steps ($1,000$ may be sufficient for small values of $a$).
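
As a starting point, here is a minimal sketch of a single trial move, assuming the (illustrative) `lj_potential` helper above and periodic boundaries with the minimum-image convention. This naive version recomputes the full energy at every step; the Efficiency section below explains how to avoid that.

In [ ]:
# One NVT trial move: displace a random particle and apply the
# Metropolis rule. Naive O(N^2) energy recomputation each step.
import numpy

def total_energy(positions, L, r_cut):
    """Sum the truncated pair potential over all pairs."""
    E = 0.0
    N = len(positions)
    for i in range(N):
        for j in range(i + 1, N):
            d = positions[i] - positions[j]
            d -= L * numpy.rint(d / L)     # minimum-image convention
            E += lj_potential(numpy.dot(d, d), r_cut**2)
    return E

def mc_step(positions, E, L, r_cut, T, delta):
    k = numpy.random.randint(len(positions))
    trial = positions.copy()
    trial[k] = (trial[k] + delta * (numpy.random.rand(3) - 0.5)) % L
    E_trial = total_energy(trial, L, r_cut)
    # Accept downhill, else with probability exp(-(E_trial - E)/T).
    if E_trial < E or numpy.random.rand() < numpy.exp(-(E_trial - E) / T):
        return trial, E_trial, True        # accepted
    return positions, E, False             # rejected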

For reference, we note the solutions for the pressures at $T=2$, taken from Johnson, Zollweg and Gubbins:


In [2]:
# Pressures at T=2 from Johnson, Zollweg and Gubbins, for the
# densities rho = a/10, a = 1, ..., 9, used above.
p_JZG_T2 = [0.1776, 0.329, 0.489, 0.7, 1.071, 1.75, 3.028, 5.285, 9.12]

Efficiency

Note that the sum over all pairs of particles scales as $n^2$, where $n$ is the number of particles. As the number of steps the algorithm needs to take will also scale with $n$, this makes the total number of calculations at least as bad as $n^3$. This is expensive: with the naive approach you'll have difficulty using more than $50$ particles in a moderate time.

Instead we can note that, at each step, the algorithm moves only one particle. Therefore, if we store not just the locations of the particles but also their pairwise separations, at each step we only have to modify a small number of those separations. We can store just $r^2_{ij} = \vec{r}_{ij} \cdot \vec{r}_{ij}$, for $j > i$, and when perturbing particle $k$ we only need to update the separations $r^2_{ik}$ for $i<k$ and $r^2_{kj}$ for $k<j$; see the sketch below.

This should significantly reduce the number of calculations done in each step.
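
A minimal sketch of this update, assuming positions in an $(N, 3)$ array, the squared separations in the upper triangle of an $(N, N)$ array, and periodic boundaries with the minimum-image convention (the function name is an illustrative choice):

In [ ]:
# After moving particle k, refresh only row/column k of the stored
# squared separations (upper triangle, j > i).
import numpy

def update_separations(r2, positions, k, L):
    d = positions - positions[k]      # displacements from particle k
    d -= L * numpy.rint(d / L)        # minimum-image convention
    dist2 = numpy.sum(d**2, axis=1)
    r2[:k, k] = dist2[:k]             # pairs (i, k) with i < k
    r2[k, k+1:] = dist2[k+1:]         # pairs (k, j) with j > k
    return r2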

In addition, note that for reasonable behaviour the acceptance rate should be $\sim 40\%$. This depends on the fractional perturbation distance $\Delta$; values $\sim 0.4$ are reasonable when $\rho \sim 0.1$, but values $\sim 0.02$ are reasonable when $\rho \sim 0.9$.

Results

  • Check that the energy has converged to a "constant" state.
  • Plot a histogram of the energies to show that they follow the Boltzmann distribution.

In [3]:
%matplotlib inline
import numpy
from scipy import constants
from matplotlib import pyplot
from mpl_toolkits.mplot3d.axes3d import Axes3D
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)